Three-dimensional object detection device and three-dimensional object detection method
Patent abstract:
THREE-DIMENSIONAL OBJECT DETECTION APPARATUS AND THREE-DIMENSIONAL OBJECT DETECTION METHOD. A three-dimensional object detection apparatus (1) is provided with a camera (10) and a computer (20). The computer (20) submits an image captured by the camera (10) to a viewpoint conversion process to create an aerial view image, calculates the difference in brightness between two pixels near each of a plurality of positions along a vertical imaginary line extending in the vertical direction in real space, and detects a three-dimensional object based on the continuity of the differences in brightness calculated at the respective positions.
Publication number: BR112013003851B1
Application number: R112013003851-9
Filing date: 2011-07-29
Publication date: 2021-03-09
Inventors: Chikao Tsuchiya; Hiroyuki Furushou; Shinya Tanaka; Yasuhisa Hayakawa
Applicant: Nissan Motor Co., Ltd
Main IPC classification:
Patent description:
TECHNICAL FIELD
[001] The present invention relates to a three-dimensional object detection apparatus and a three-dimensional object detection method.
BACKGROUND ART
[002] Conventionally, a three-dimensional object detection apparatus has been proposed that detects horizontal edges or vertical edges in real space from an aerial view image obtained by subjecting a captured image to viewpoint conversion to the aerial view point, and then detects a three-dimensional object such as a vehicle by using the number of these edges. In this three-dimensional object detection apparatus, vertical edges in real space are projected so as to appear in the aerial view image as a group of straight radial lines passing through a viewing point of a camera. Based on this knowledge, the three-dimensional object detection apparatus detects the three-dimensional object by detecting the vertical edges and then using the amount of the vertical edges (see Patent Document 1).
PRIOR ART DOCUMENT
Patent Document
Patent Document 1: Japanese patent application publication Hei 4-163249.
SUMMARY OF THE INVENTION
Technical Problem
[003] However, a three-dimensional object in the aerial view image obtained by subjecting the captured image to viewpoint conversion to the aerial view point is stretched depending on its height. Accordingly, in the aerial view image, an edge appearing at a high position of the three-dimensional object (an edge at a high position in real space) has a lower resolution than an edge at a low position of the three-dimensional object (an edge at a low position in real space). Meanwhile, the width of the edge at the low position of the three-dimensional object becomes smaller.
[004] For this reason, a problem occurs when edges are detected by using a three-pixel by three-pixel differential filter as in the technique described in Patent Document 1. With respect to an edge at a high position of a three-dimensional object, the edge, which is actually present, cannot be detected because of its low resolution. With respect to an edge at a low position of the three-dimensional object, the edge may be judged to be noise and not be detected as an edge because of its small width. For these reasons, the three-dimensional object detection apparatus described in Patent Document 1 has a problem in that the detection accuracy of the three-dimensional object is impaired.
[005] The present invention was made to solve such a problem of the related art, and its objective is to provide a three-dimensional object detection apparatus and a three-dimensional object detection method capable of improving the detection accuracy of a three-dimensional object.
Solution to the Problem
[006] To solve the aforementioned problem, the present invention creates an aerial view image by performing viewpoint conversion processing on an image captured by an image capture device, then calculates, for each of a plurality of positions along a vertical imaginary line extending in the vertical direction in real space, a difference in luminance between two pixels close to the position, and detects a three-dimensional object based on continuities of the luminance differences calculated at the respective positions.
Effects of the Invention
[007] In the present invention, when the image of the predetermined area is viewed from the aerial view point, a vertical imaginary line extending in the vertical direction in real space is established and the three-dimensional object is detected based on the continuities of the luminance differences along the vertical imaginary line. Specifically, in the present invention, when the luminance differences are high, it is likely that an edge of the three-dimensional object exists in the part with the high luminance difference. The three-dimensional object can thus be detected based on the continuous luminance differences. In particular, since two pixels along the vertical imaginary line extending in the vertical direction in real space are compared with each other, the detection is not affected by the phenomenon in which the three-dimensional object is stretched depending on its height from the road surface, this phenomenon being caused by the viewpoint conversion performed when the captured image is converted to the aerial view image. Accordingly, in the present invention, the detection accuracy of the three-dimensional object can be improved.
BRIEF DESCRIPTION OF THE DRAWINGS
[008] Figure 1 is a schematic diagram of the configuration of a three-dimensional object detection apparatus of an embodiment, which is a schematic block diagram showing an example where the three-dimensional object detection apparatus is mounted on a vehicle.
[009] Figure 2 is a view showing an image capture area of a camera in a three-dimensional object detection apparatus shown as a first embodiment, in which Part (a) is a top view showing a positional relationship between detection areas and the like, and Part (b) is a perspective view showing the positional relationship between the detection areas and the like in real space.
[010] Figure 3 is a block diagram showing a functional configuration of the three-dimensional object detection apparatus shown as the first embodiment.
[011] Figure 4 is a view showing operations of a luminance difference calculation part in the three-dimensional object detection apparatus shown as the first embodiment, in which Part (a) is a view showing a positional relationship between an attention line, a reference line, an attention point and a reference point in an aerial view image, and Part (b) is a view showing a positional relationship between the attention line, the reference line, the attention point and the reference point in real space.
[012] Figure 5 is a view showing detailed operations of the luminance difference calculation part in the three-dimensional object detection apparatus shown as the first embodiment, in which Part (a) shows the detection area in the aerial view image, and Part (b) is a view showing the positional relationship between the attention line, the reference line, the attention points and the reference points in the aerial view image.
[013] Figure 6 is a view showing edge lines and the luminance distribution on each of the edge lines, in which Part (a) shows the edge line and the luminance distribution on the edge line in the case where a three-dimensional object (vehicle) exists in the detection area, and Part (b) shows the edge line and the luminance distribution on the edge line in the case where no three-dimensional object exists in the detection area.
[014] Figure 7 is a flowchart showing operation procedures performed by the three-dimensional object detection apparatus shown as the first embodiment.
[015] Figure 8 is a flowchart showing operation procedures performed by the three-dimensional object detection apparatus shown as the first embodiment.
[016] Figure 9 is a view showing an example image to explain an edge detection operation in the three-dimensional object detection apparatus shown as the first embodiment.
[017] Figure 10 is a view showing detailed operations of a luminance difference calculation part in a three-dimensional object detection apparatus shown as a second embodiment, in which Part (a) shows a detection area in an aerial view image, and Part (b) is a view showing a positional relationship between a vertical imaginary line L, first reference points and second reference points in the aerial view image.
[018] Figure 11 is a flowchart showing a complete operation in the three-dimensional object detection apparatus shown as the second embodiment.
[019] Figure 12 is a flowchart showing a detection operation for a vertical edge that is performed by the three-dimensional object detection apparatus shown as the second embodiment.
[020] Figure 13 is another view showing detailed operations of the luminance difference calculation part in the three-dimensional object detection apparatus shown as the second embodiment, in which Part (a) shows the detection area in the aerial view image, and Part (b) is a view showing a positional relationship between the vertical imaginary line L, the first reference points and the second reference points in the aerial view image.
[021] Figure 14 is an explanatory diagram of changing a threshold depending on the relationship between the vertical imaginary line, the first reference points and the second reference points, in the three-dimensional object detection apparatus shown as the second embodiment.
[022] Figure 15 is a flowchart showing another detection operation for the vertical edge that is performed by the three-dimensional object detection apparatus shown as the second embodiment.
[023] Figure 16 is a block diagram showing a functional configuration of a three-dimensional object detection apparatus shown as a third embodiment.
[024] Figure 17 is a view showing operations of an edge intensity calculation part in the three-dimensional object detection apparatus shown as the third embodiment, in which Part (a) is a view showing a relationship between a detection area, an attention line La and edge intensity in an aerial view image in which a three-dimensional object exists, and Part (b) is a view showing a relationship between the detection area, the attention line La and the edge intensity in an aerial view image in which no three-dimensional object exists.
[025] Figure 18 is a flowchart showing operation procedures performed by the three-dimensional object detection apparatus shown as the third embodiment.
DESCRIPTION OF EMBODIMENTS
[026] Embodiments of the present invention are described below based on the drawings. Figure 1 is a schematic diagram of the configuration of a three-dimensional object detection apparatus 1 of the embodiments. In the embodiments, examples are shown in which the three-dimensional object detection apparatus 1 is mounted on a vehicle V1. As shown in figure 1, the three-dimensional object detection apparatus 1 includes a camera (image capture device) 10 and a calculator 20.
[027] The camera 10 is attached to a rear end part of the vehicle V1 at a position at height h. The camera 10 is fixed in such a way that its optical axis is tilted downward from the horizontal line by an angle θ1.
The camera 10 captures an image of a predetermined area from this fixing position. The camera 10 provides the captured image to the calculator 20. The calculator 20 detects the presence or absence of a three-dimensional object diagonally behind the vehicle V1 by using the image provided by the camera 10.
[028] Figure 2 is a view showing the image capture range and the like of the camera 10 shown in figure 1. Part (a) of figure 2 shows a top view. Part (b) of figure 2 shows a perspective view of the real space diagonally behind the vehicle V1. As shown in Part (a) of figure 2, the camera 10 has a predetermined viewing angle a. The camera 10 captures an image of an area diagonally behind the vehicle V1 within the predetermined viewing angle a. The viewing angle a of the camera 10 is established in such a way that not only the lane in which the vehicle V1 is traveling but also the lanes adjacent to it are included in the image capture range of the camera 10.
[029] The calculator 20 performs various types of processing on the parts, in the image captured by the camera 10, corresponding to detection areas A1, A2 for the three-dimensional object to be detected. The calculator 20 thus determines whether the three-dimensional object (particularly, another vehicle V2) exists in the detection areas A1, A2. Each of the detection areas A1, A2 has a trapezoidal shape in the top view. The position, size and shape of each of the detection areas A1, A2 are determined based on distances d1 to d4.
[030] The distance d1 is a distance from the vehicle V1 to a ground contact line L1 or L2. The ground contact line L1 or L2 is a line where the three-dimensional object existing in a lane adjacent to the lane in which the vehicle V1 is traveling comes into contact with the ground surface. An objective of the embodiment is to detect the other vehicle V2 and the like (including two-wheel vehicles and the like) that are located diagonally behind the vehicle V1 and are traveling in the adjacent lane on the right or left side of the lane of the vehicle V1. Accordingly, the distance d1, which is the distance to the position of the ground contact line L1 or L2 of the other vehicle V2, can be determined in a substantially fixed manner from a distance d11 from the vehicle V1 to a white line W and a distance d12 from the white line W to a position where the other vehicle V2 is expected to travel.
[031] The distance d1 is not necessarily fixed and may be variable. In this case, the calculator 20 recognizes the position of the white line W relative to the vehicle V1 by using a white line recognition technique or the like and determines the distance d11 based on the recognized position of the white line W. The distance d1 is thus variably established by using the determined distance d11.
[032] In the embodiment, since the position where the other vehicle V2 travels (the distance d12 from the white line W) and the position where the vehicle V1 travels (the distance d11 from the white line W) are approximately constant, it is assumed that the distance d1 is fixedly determined.
[033] The distance d2 is a distance in the vehicle travel direction from the rear end of the vehicle V1. The distance d2 is determined in such a way that at least the detection areas A1, A2 are included in the viewing angle a of the camera 10. Particularly, in the embodiment, the distance d2 is established in such a way that each of the detection areas A1, A2 is in contact with an area defined by the viewing angle a.
[034] The distance d3 is a distance showing the length of each of the detection areas A1, A2 in the vehicle travel direction. The distance d3 is determined based on the size of the three-dimensional object to be detected. In the embodiment, since the other vehicle V2 and the like are to be detected, the distance d3 is established at a length in which the other vehicle V2 can be included.
[035] As shown in Part (b) of figure 2, the distance d4 is a distance indicating a height established so as to include the tires of the other vehicle V2 and the like in real space. In an aerial view image, the distance d4 corresponds to the length shown in Part (a) of figure 2. It is preferable that the distance d4 be established at a length not including, in the aerial view image, the lanes adjacent to the right and left adjacent lanes (that is, lanes which are two lanes away from the lane of the vehicle V1). This is because of the following reason. When the lanes that are two lanes away from the lane of the vehicle V1 are included, it is impossible to determine whether the other vehicle V2 is in the adjacent lane on the right or left side of the lane in which the vehicle V1 is traveling, or in the lane which is two lanes away from it.
[036] The distances d1 to d4 are determined as described above, and the position, size and shape of each of the detection areas A1, A2 are thus determined. To be specific, the position of an upper edge b1 of each of the detection areas A1, A2 having the trapezoidal shape is determined from the distance d1. A start point position C1 of the upper edge b1 is determined from the distance d2. An end point position C2 of the upper edge b1 is determined from the distance d3. A side edge b2 of each of the detection areas A1, A2 having the trapezoidal shape is determined from a straight line L3 extending from the camera 10 to the start point position C1. Similarly, a side edge b3 of each of the detection areas A1, A2 having the trapezoidal shape is determined from a straight line L4 extending from the camera 10 to the end point position C2. A lower edge b4 of each of the detection areas A1, A2 having the trapezoidal shape is determined from the distance d4. The areas surrounded by the edges b1 to b4 are referred to as the detection areas A1, A2. As shown in Part (b) of figure 2, the detection areas A1, A2 have quadrangular shapes (rectangular shapes) in the real space diagonally behind the vehicle V1.
[037] In the embodiment, the detection areas A1, A2 have the trapezoidal shape in the aerial view point of view. However, the detection areas A1, A2 are not limited to this and may have other shapes, such as quadrangular shapes, in the aerial view point of view.
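To make the construction of paragraph [036] concrete, the following is a minimal sketch, in Python, of how the four corners of one trapezoidal detection area could be derived from the distances d1 to d4. The coordinate frame, the function names and all numeric values are illustrative assumptions and are not taken from the patent; only the construction itself (upper edge b1 from d1, points C1 and C2 from d2 and d3, side edges b2 and b3 on straight lines from the camera, lower edge b4 from d4) follows the text above.

```python
# Sketch of the trapezoidal detection area A1 derived from d1 to d4.
# Bird's-eye coordinates in metres: x lateral from the own vehicle, y along the
# travel direction (negative behind the rear end). All values are assumptions.
from typing import List, Tuple

def detection_area_corners(d1: float, d2: float, d3: float, d4: float,
                           camera_pos: Tuple[float, float]) -> List[Tuple[float, float]]:
    """Return the four corners of one detection area (C1, C2 and the two outer corners)."""
    c1 = (d1, -d2)            # start point C1 on the upper edge b1
    c2 = (d1, -(d2 + d3))     # end point C2 on the upper edge b1

    def on_ray(p: Tuple[float, float], x: float) -> Tuple[float, float]:
        # Point on the straight line from the camera through p, cut at lateral coordinate x.
        t = (x - camera_pos[0]) / (p[0] - camera_pos[0])
        return (x, camera_pos[1] + t * (p[1] - camera_pos[1]))

    outer1 = on_ray(c1, d1 + d4)  # where side edge b2 meets the lower edge b4
    outer2 = on_ray(c2, d1 + d4)  # where side edge b3 meets the lower edge b4
    return [c1, c2, outer2, outer1]

print(detection_area_corners(d1=3.0, d2=1.0, d3=7.0, d4=1.0, camera_pos=(0.0, 0.0)))
```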
[038] Figure 3 is a block diagram showing details of the calculator 20 shown in figure 1. In figure 3, the camera 10 is also illustrated to clarify the connection relationship.
[039] As shown in figure 3, the calculator 20 includes a viewpoint conversion part (viewpoint conversion device) 21, a luminance difference calculation part (luminance difference calculation device) 22, an edge line detection part (edge line detection device) 23 and a three-dimensional object detection part (three-dimensional object detection device) 24. The calculator 20 is a computer including a CPU, a RAM, a ROM and the like. The calculator 20 implements the functions of the viewpoint conversion part 21, the luminance difference calculation part 22, the edge line detection part 23 and the three-dimensional object detection part 24 by performing image processing according to a predefined program.
[040] The viewpoint conversion part 21 receives captured image data of the predetermined area obtained by image capturing with the camera 10. The viewpoint conversion part 21 performs viewpoint conversion processing on the received captured image data in which the captured image data are converted into aerial view image data in an aerial view state. The aerial view state is a state where an image is obtained as if from the viewing point of a virtual camera looking at the vehicle vertically downward (or slightly obliquely downward), for example. The viewpoint conversion processing is performed by using the technique described in Japanese patent application publication 2008-219063, for example.
[041] The luminance difference calculation part 22 calculates a luminance difference in the aerial view image data subjected to the viewpoint conversion by the viewpoint conversion part 21, in order to detect an edge of the three-dimensional object included in the aerial view image. The luminance difference calculation part 22 calculates, for each of multiple positions along a vertical imaginary line extending in the vertical direction in real space, the luminance difference between two pixels close to the position.
[042] The luminance difference calculation part 22 calculates the luminance difference by using either a method of establishing one vertical imaginary line extending in the vertical direction in real space or a method of establishing two vertical imaginary lines.
[043] A description of the specific method of establishing two vertical imaginary lines is given. With respect to the aerial view image subjected to the viewpoint conversion, the luminance difference calculation part 22 establishes a first vertical imaginary line that corresponds to a line segment extending in the vertical direction in real space, and a second vertical imaginary line that is different from the first vertical imaginary line and that corresponds to a line segment extending in the vertical direction in real space. The luminance difference calculation part 22 obtains the luminance differences between points on the first vertical imaginary line and points on the second vertical imaginary line, continuously along the first vertical imaginary line and the second vertical imaginary line. This operation of the luminance difference calculation part 22 is described in detail below.
[044] As shown in Part (a) of figure 4, the luminance difference calculation part 22 establishes the first vertical imaginary line La (hereinafter referred to as the attention line La), which corresponds to a line segment extending in the vertical direction in real space and crossing the detection area A1. The luminance difference calculation part 22 also establishes the second vertical imaginary line Lr (hereinafter referred to as the reference line Lr), which is different from the attention line La and corresponds to a line segment extending in the vertical direction in real space and passing through the detection area A1. The reference line Lr is set apart from the attention line La by a predetermined distance in real space.
The lines corresponding to the line segments extending in the vertical direction in real space are lines extending radially from the position Ps of the camera 10 in the aerial view image.
[045] The luminance difference calculation part 22 establishes an attention point Pa on the attention line La (a point on the first vertical imaginary line). The luminance difference calculation part 22 also establishes a reference point Pr on the reference line Lr (a point on the second vertical imaginary line).
[046] The relationship between the attention line La, the attention point Pa, the reference line Lr and the reference point Pr in real space is as shown in Part (b) of figure 4. As is apparent from Part (b) of figure 4, the attention line La and the reference line Lr extend in the vertical direction in real space, and the attention point Pa and the reference point Pr are established at almost the same height in real space. There is no need to establish the attention point Pa and the reference point Pr at precisely the same height; a certain tolerance is of course allowed within which the attention point Pa and the reference point Pr can be regarded as being at substantially the same height.
[047] The luminance difference calculation part 22 obtains the luminance difference between the attention point Pa and the reference point Pr. When the luminance difference between the attention point Pa and the reference point Pr is large, an edge probably exists between the attention point Pa and the reference point Pr. For this reason, the edge line detection part 23 shown in figure 3 detects an edge line based on the luminance difference between the attention point Pa and the reference point Pr.
[048] This operation is described in further detail. Figure 5 is a view showing the detailed operation of the luminance difference calculation part 22 shown in figure 3. Part (a) of figure 5 shows the aerial view image taken from the aerial view point, and Part (b) of figure 5 shows an enlarged partial view of the aerial view image shown in Part (a) of figure 5. Although only the detection area A1 is illustrated in figure 5, the luminance difference can be calculated in a similar way for the detection area A2.
[049] When the other vehicle V2 is included in the image captured by the camera 10, the other vehicle V2 appears in the detection area A1 in the aerial view image as shown in Part (a) of figure 5. Assume that the attention line La is established on the rubber part of the tire of the other vehicle V2 in the aerial view image as shown in Part (b) of figure 5, which is an enlarged view of a region B1 in Part (a) of figure 5.
[050] In this state, the luminance difference calculation part 22 first establishes the reference line Lr. The reference line Lr is established in the vertical direction at a position apart from the attention line La by a predetermined distance in real space. Specifically, in the three-dimensional object detection apparatus 1 of the embodiment, the reference line Lr is established at a position 10 cm apart from the attention line La in real space. Accordingly, in the aerial view image, the reference line Lr is established, for example, on the wheel of the tire of the other vehicle V2, 10 cm away from the rubber part of the tire of the other vehicle V2.
[051] Next, the luminance difference calculation part 22 establishes multiple attention points Pa1 to PaN on the attention line La.
For example, in Part (b) of figure 5, six attention points Pa1 to Pa6 (hereinafter referred to simply as the attention point Pai when referring to an arbitrary point) are established for convenience of description. The number of attention points Pa established on the attention line La can be any number. In the following, the description is given under the assumption that N attention points Pa are established on the attention line La.
[052] Subsequently, the luminance difference calculation part 22 establishes the reference points Pr1 to PrN in such a way that the reference points Pr1 to PrN and the attention points Pa1 to PaN are located at respectively the same heights in real space.
[053] Then, the luminance difference calculation part 22 calculates the luminance difference between the attention points Pa and the reference points Pr located at the same heights. The luminance difference calculation part 22 thereby calculates the luminance difference between two pixels for each of the multiple positions (1 to N) along the vertical imaginary line extending in the vertical direction in real space. For example, the luminance difference calculation part 22 calculates the luminance difference between the first attention point Pa1 and the first reference point Pr1, and calculates the luminance difference between the second attention point Pa2 and the second reference point Pr2. The luminance difference calculation part 22 thereby continuously obtains the luminance differences along the attention line La and the reference line Lr. The luminance difference calculation part 22 then sequentially obtains the luminance differences between the third to N-th attention points Pa3 to PaN and the third to N-th reference points Pr3 to PrN.
[054] The luminance difference calculation part 22 moves the attention line La within the detection area A1 and repeatedly performs the processing of establishing the reference line Lr, establishing the attention points Pa and the reference points Pr, and calculating the luminance differences. Specifically, the luminance difference calculation part 22 shifts the respective positions of the attention line La and the reference line Lr by the same distance in the direction in which the ground contact line extends in real space, and repeatedly performs the processing described above. For example, the luminance difference calculation part 22 establishes, as the new attention line La, the line that was established as the reference line Lr in the previous processing, establishes a new reference line Lr for the new attention line La, and then sequentially obtains the luminance differences.
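The following is a minimal sketch of the sampling just described: the attention line La and the reference line Lr are taken as lines radiating from the camera position Ps in the aerial view image, N point pairs are placed along them, and a signed luminance difference is obtained per pair. The function names, the use of an equal radius from Ps as a stand-in for "the same height in real space", and all numeric values are assumptions for illustration only.

```python
# Sketch of the point sampling performed by the luminance difference calculation part 22.
import numpy as np

def sample_radial_line(ps, angle_rad, radii):
    """Pixel coordinates along a line radiating from the camera position Ps."""
    return [(ps[0] + r * np.cos(angle_rad), ps[1] + r * np.sin(angle_rad)) for r in radii]

def luminance_differences(birds_eye_gray, ps, angle_la, angle_lr, radii):
    pa_points = sample_radial_line(ps, angle_la, radii)   # attention points Pa_1..Pa_N on La
    pr_points = sample_radial_line(ps, angle_lr, radii)   # reference points Pr_1..Pr_N on Lr
    diffs = []
    for (xa, ya), (xr, yr) in zip(pa_points, pr_points):
        la_val = float(birds_eye_gray[int(round(ya)), int(round(xa))])
        lr_val = float(birds_eye_gray[int(round(yr)), int(round(xr))])
        diffs.append(la_val - lr_val)                     # one signed difference per height
    return diffs

image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
ps = (320.0, 479.0)                      # assumed camera position in the bird's-eye image
radii = np.linspace(80, 300, 6)          # six attention points, as in Part (b) of figure 5
# The small angular offset between La and Lr stands in for the predetermined
# real-space distance (10 cm in the embodiment); the value here is arbitrary.
print(luminance_differences(image, ps, np.deg2rad(250), np.deg2rad(252), radii))
```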
[055] Referring to figure 3 again, the edge line detection part 23 detects the edge line based on the continuous luminance differences calculated by the luminance difference calculation part 22. For example, in the case shown in Part (b) of figure 5, since the first attention point Pa1 and the first reference point Pr1 are both located on the same tire part, the luminance difference between them is small. Meanwhile, the second to sixth attention points Pa2 to Pa6 are located on the rubber part of the tire while the second to sixth reference points Pr2 to Pr6 are located on the wheel part of the tire, so the luminance differences between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6 are large. The edge line detection part 23 can thus detect the existence of the edge line between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6, which have the large luminance differences between them.
[056] Specifically, when detecting the edge line, the edge line detection part 23 first assigns an attribute to the i-th attention point Pai based on the luminance difference between the i-th attention point Pai (coordinates (xi, yi)) and the i-th reference point Pri (coordinates (xi', yi')), according to Formula (1) shown below.
Formula 1:
s(xi, yi) = 1 (when I(xi, yi) > I(xi', yi') + t is satisfied)
s(xi, yi) = -1 (when I(xi, yi) < I(xi', yi') - t is satisfied)
s(xi, yi) = 0 (in cases other than the above) ... (1)
[057] In Formula (1) shown above, t represents a threshold, I(xi, yi) represents the luminance value of the i-th attention point Pai, and I(xi', yi') represents the luminance value of the i-th reference point Pri. In Formula (1) shown above, when the luminance value of the attention point Pai is greater than a luminance value obtained by adding the threshold t to the luminance value of the reference point Pri, the attribute s(xi, yi) of the attention point Pai is "1". Meanwhile, when the luminance value of the attention point Pai is less than a luminance value obtained by subtracting the threshold t from the luminance value of the reference point Pri, the attribute s(xi, yi) of the attention point Pai is "-1". When the relationship between the luminance value of the attention point Pai and the luminance value of the reference point Pri is other than those described above, the attribute s(xi, yi) of the attention point Pai is "0".
[058] Next, the edge line detection part 23 determines whether the attention line La is an edge line from a continuity c(xi, yi) of the attributes s along the attention line La, based on Formula (2) shown below.
Formula 2:
c(xi, yi) = 1 (when s(xi, yi) = s(xi+1, yi+1) is satisfied, excluding the case where both are 0)
c(xi, yi) = 0 (in cases other than the above) ... (2)
[059] When the attribute s(xi, yi) of the attention point Pai and the attribute s(xi+1, yi+1) of the adjacent attention point Pai+1 coincide with each other, the continuity c(xi, yi) is "1". When the attribute s(xi, yi) of the attention point Pai and the attribute s(xi+1, yi+1) of the adjacent attention point Pai+1 do not coincide with each other, the continuity c(xi, yi) is "0".
[060] Subsequently, the edge line detection part 23 obtains the sum of the continuities c(xi, yi) of all the attention points Pa on the attention line La. The edge line detection part 23 then normalizes the continuities c by dividing the obtained sum of the continuities c by the number N of the attention points Pa. The edge line detection part 23 determines that the attention line La is an edge line when the normalized value exceeds a threshold θ. The threshold θ is a value preset based on experiments and the like.
[061] Specifically, the edge line detection part 23 determines whether the attention line La is an edge line based on Formula (3) shown below.
Formula 3:
Σc(xi, yi) / N > θ ... (3)
[062] The edge line detection part 23 then performs the edge line determination for all attention lines La drawn in the detection area A1.
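A direct transcription of Formulas (1) to (3) into code is given below: the attribute s per attention point, the continuity c of neighbouring attributes, and the normalised continuity compared with the threshold θ to decide whether the attention line La is an edge line. The luminance lists and the values of t and θ are illustrative assumptions.

```python
# Formulas (1) to (3) of the edge line detection part 23, as a sketch.
def attribute_s(i_pa: float, i_pr: float, t: float) -> int:
    # Formula (1): compare the attention-point luminance with the reference-point luminance.
    if i_pa > i_pr + t:
        return 1
    if i_pa < i_pr - t:
        return -1
    return 0

def is_edge_line(pa_luminance, pr_luminance, t: float, theta: float) -> bool:
    s = [attribute_s(a, r, t) for a, r in zip(pa_luminance, pr_luminance)]
    # Formula (2): continuity is 1 only when adjacent attributes coincide and are not both 0.
    c = [1 if (s[i] == s[i + 1] and s[i] != 0) else 0 for i in range(len(s) - 1)]
    # Formula (3): the sum of continuities is normalised by the number N of attention
    # points (as written in the patent) and compared with the threshold theta.
    return sum(c) / len(pa_luminance) > theta

# Example: dark tyre rubber on the attention line next to the bright wheel on the
# reference line, as in Part (b) of figure 5 -- the attributes stay at -1.
print(is_edge_line([40, 42, 41, 43, 40, 42],
                   [40, 180, 178, 182, 181, 179], t=20, theta=0.6))
```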
[063] Referring to figure 3 again, the three-dimensional object detection part 24 detects the three-dimensional object based on the amount of edge lines detected by the edge line detection part 23. As previously described, the three-dimensional object detection apparatus 1 of the embodiment detects edge lines extending in the vertical direction in real space. Detection of many edge lines extending in the vertical direction means that a three-dimensional object is likely to exist in the detection area A1 or A2. The three-dimensional object detection part 24 thus detects the three-dimensional object based on the number of edge lines detected by the edge line detection part 23.
[064] In addition, before performing the three-dimensional object detection, the three-dimensional object detection part 24 determines whether each of the edge lines detected by the edge line detection part 23 is an appropriate edge line. The three-dimensional object detection part 24 determines whether a luminance change in the aerial view image along the edge line is greater than a predetermined threshold. When the luminance change in the aerial view image on the edge line is greater than the threshold, the edge line is determined to have been detected by means of erroneous determination. Meanwhile, when the luminance change in the aerial view image on the edge line is not greater than the threshold, the edge line is determined to be an appropriate edge line. The threshold is a value preset based on experiments and the like.
[065] Figure 6 is a view showing the luminance distribution of edge lines. Part (a) of figure 6 shows the edge line and the luminance distribution in the case where the other vehicle V2 exists as the three-dimensional object in the detection area A1. Part (b) of figure 6 shows the edge line and the luminance distribution in the case where no three-dimensional object exists in the detection area A1.
[066] Assume that the attention line La established on the rubber part of the tire of the other vehicle V2 is determined to be an edge line in the aerial view image as shown in Part (a) of figure 6. In this case, the luminance change in the aerial view image on the attention line La is moderate. This is because the image captured by the camera 10 is subjected to viewpoint conversion to the aerial view image (aerial view point), so the tire of the other vehicle V2 is stretched in the aerial view image.
[067] Meanwhile, assume that the attention line La established on the white character part of "50" drawn on the road surface is erroneously determined to be an edge line in the aerial view image as shown in Part (b) of figure 6. In this case, the luminance change in the aerial view image on the attention line La fluctuates widely. This is because parts with high luminance corresponding to the white character part and parts with low luminance corresponding to the road surface and the like are mixed on the edge line.
[068] The edge line detection part 23 determines whether the edge line was detected by erroneous determination based on the difference, described above, in the luminance distribution on the attention line La. When the luminance change along the edge line is greater than the predetermined threshold, the three-dimensional object detection part 24 determines that the edge line was detected by means of erroneous determination. Therefore, that edge line is not used for the detection of the three-dimensional object.
This suppresses the occurrence of cases where white characters such as "50" on the road surface, roadside grass and the like are determined to be edge lines and the detection accuracy of the three-dimensional object is thereby reduced.
[069] Specifically, the three-dimensional object detection part 24 calculates the luminance change of the edge line by using either of Formulas (4) and (5) shown below. The luminance change of the edge line corresponds to an evaluation value in the vertical direction in real space. In Formula (4) shown below, the luminance distribution is evaluated by using the total value of the squares of the differences, each of which is the difference between the i-th luminance value I(xi, yi) and the adjacent (i+1)-th luminance value I(xi+1, yi+1) on the attention line La. In Formula (5) shown below, the luminance distribution is evaluated by using the total value of the absolute values of the differences, each of which is the difference between the i-th luminance value I(xi, yi) and the adjacent (i+1)-th luminance value I(xi+1, yi+1) on the attention line La.
Formula 4:
(Evaluation value in the direction corresponding to verticality) = Σ{(I(xi, yi) - I(xi+1, yi+1))^2} ... (4)
Formula 5:
(Evaluation value in the direction corresponding to verticality) = Σ|I(xi, yi) - I(xi+1, yi+1)| ... (5)
[070] The calculation is not limited to the one using Formula (5) and can also be performed as follows. As shown in Formula (6) below, the attribute b of adjacent luminance values can be binarized by using a threshold t2, and the binarized attributes b for all attention points Pa can be added up.
Formula 6:
(Evaluation value in the direction corresponding to verticality) = Σb(xi, yi) ... (6)
where b(xi, yi) = 1 (when |I(xi, yi) - I(xi+1, yi+1)| > t2)
b(xi, yi) = 0 (in cases other than the above).
[071] When the absolute value of the luminance difference between the luminance value of the attention point Pai and the luminance value of the reference point Pri is greater than the threshold t2, the attribute b(xi, yi) of the attention point Pai is "1". When the relationship between the absolute value and the threshold t2 is other than that, the attribute b(xi, yi) of the attention point Pai is "0". The threshold t2 is preset based on experiments and the like in order to determine that the attention line La is not located on the same three-dimensional object. The three-dimensional object detection part 24 then adds up the attributes b for all attention points Pa on the attention line La, obtains the evaluation value in the direction corresponding to verticality, and determines whether the edge line is an appropriate edge line.
[072] In the following, a three-dimensional object detection method of the embodiment is described. Figures 7 and 8 are flowcharts showing details of the three-dimensional object detection method of the embodiment. In figures 7 and 8, the processing is described for the detection area A1 for convenience. However, similar processing can also be performed for the detection area A2.
[073] As shown in figure 7, first, in step S1, the camera 10 captures the image of the predetermined area defined by the viewing angle a and the fixing position.
[074] Then, in step S2, the luminance difference calculation part 22 receives the image data captured by the camera 10 in step S1 and generates the aerial view image data by performing the viewpoint conversion.
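As a rough illustration of the aerial view image produced in step S2, the sketch below performs a single planar homography with OpenCV. The patent refers to the technique of Japanese patent application publication 2008-219063 for the viewpoint conversion performed by the viewpoint conversion part 21 described in paragraph [040]; the homography used here is only a common stand-in, and the four point correspondences are made-up values that would in practice follow from the camera height h and tilt angle θ1.

```python
# Minimal viewpoint-conversion sketch: captured image -> bird's-eye (aerial view) image.
import cv2
import numpy as np

def to_birds_eye(captured: np.ndarray) -> np.ndarray:
    # Four points on the road plane in the captured image and where they should
    # land when seen from above. Both sets of coordinates are assumptions.
    src = np.float32([[220, 300], [420, 300], [640, 480], [0, 480]])
    dst = np.float32([[150, 0], [490, 0], [490, 480], [150, 480]])
    h_matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(captured, h_matrix, (640, 480))

birds_eye = to_birds_eye(np.zeros((480, 640, 3), dtype=np.uint8))
print(birds_eye.shape)
```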
[075] Then, in step S3, the luminance difference calculation part 22 establishes the attention line La in the detection area A1. At this time, the luminance difference calculation part 22 establishes, as the attention line La, a line corresponding to a line extending in the vertical direction in real space.
[076] Subsequently, in step S4, the luminance difference calculation part 22 establishes the reference line Lr in the detection area A1. At this time, the luminance difference calculation part 22 establishes, as the reference line Lr, a line that corresponds to a line segment extending in the vertical direction in real space and that is apart from the attention line La by the predetermined distance in real space.
[077] Then, in step S5, the luminance difference calculation part 22 establishes multiple attention points Pa on the attention line La. At this time, the luminance difference calculation part 22 establishes an appropriate number of attention points Pa such that no problem occurs in the edge determination by the edge line detection part 23.
[078] In addition, in step S6, the luminance difference calculation part 22 establishes the reference points Pr in such a way that each attention point Pa and the corresponding reference point Pr are established at almost the same height in real space. Each attention point Pa and the corresponding reference point Pr are thus arranged in an almost horizontal direction, which facilitates the detection of the edge line extending in the vertical direction in real space.
[079] Then, in step S7, the luminance difference calculation part 22 calculates the luminance difference between each attention point Pa and the corresponding reference point Pr that are at the same height in real space.
[080] The edge line detection part 23 then calculates the attribute s of each attention point Pa according to Formula (1) shown above. Subsequently, in step S8, the edge line detection part 23 calculates the continuities c of the attributes s of the attention points Pa according to Formula (2) shown above.
[081] Next, in step S9, the edge line detection part 23 determines whether the value obtained by normalizing the sum of the continuities c is greater than the threshold θ, according to Formula (3) shown above. When the normalized value is determined to be greater than the threshold θ (S9: YES), the edge line detection part 23 detects the attention line La as an edge line in step S10, and the processing proceeds to step S11. When the normalized value is determined to be not greater than the threshold θ (S9: NO), the edge line detection part 23 does not detect the attention line La as an edge line, and the processing proceeds to step S11.
[082] In step S11, the calculator 20 determines whether the processing of steps S3 to S10 described above has been performed for all attention lines La that can be established in the detection area A1. When the calculator 20 determines that the processing has not been performed for all attention lines La (S11: NO), the processing returns to step S3, a new attention line La is established, and the processing up to step S11 is repeated. Meanwhile, when the calculator 20 determines that the processing has been performed for all attention lines La (S11: YES), the processing proceeds to step S12 of figure 8.
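A condensed sketch of the loop of steps S3 to S11 is given below: every attention line La that can be set in the detection area A1 is paired with a reference line Lr, the attributes of Formula (1) and continuities of Formula (2) are computed, and the line is recorded as an edge line when the normalised continuity exceeds θ. The image, the geometry (a small angular offset standing in for the predetermined real-space distance) and the thresholds are illustrative assumptions.

```python
# Sketch of steps S3 to S11 of figure 7.
import numpy as np

def detect_edge_lines(gray, ps, angles_deg, radii, offset_deg=2.0, t=20, theta=0.6):
    edge_lines = []
    for angle in angles_deg:                               # step S3: new attention line La
        la = np.deg2rad(angle)
        lr = np.deg2rad(angle + offset_deg)                # step S4: reference line Lr
        s = []
        for r in radii:                                    # steps S5 to S7: points Pa_i, Pr_i
            pa = int(gray[int(ps[1] + r * np.sin(la)), int(ps[0] + r * np.cos(la))])
            pr = int(gray[int(ps[1] + r * np.sin(lr)), int(ps[0] + r * np.cos(lr))])
            s.append(1 if pa > pr + t else (-1 if pa < pr - t else 0))
        c = [1 if (s[i] == s[i + 1] and s[i] != 0) else 0 for i in range(len(s) - 1)]
        if sum(c) / len(radii) > theta:                    # steps S8 to S10
            edge_lines.append(angle)
    return edge_lines                                      # step S11: all lines examined

gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(detect_edge_lines(gray, ps=(320, 479), angles_deg=range(230, 260), radii=range(80, 300, 40)))
```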
[083] In step S12 of figure 8, the three-dimensional object detection part 24 calculates the luminance change along each of the edge lines detected in step S10 of figure 7. The three-dimensional object detection part 24 calculates the luminance change of each edge line according to any of Formulas (4), (5) and (6) shown above.
[084] Then, in step S13, the three-dimensional object detection part 24 excludes, from among the edge lines, any edge line whose luminance change is greater than the predetermined threshold. Specifically, the three-dimensional object detection part 24 determines that an edge line having a large luminance change is not an appropriate edge line and does not use that edge line for the three-dimensional object detection. As previously described, this is to prevent characters on the road surface, grass on the roadside and the like included in the detection area A1 from being detected as edge lines. Accordingly, the predetermined threshold is a value obtained in advance from experiments and the like and established on the basis of the luminance changes that occur because of characters on the road surface, grass on the roadside and the like.
[085] Subsequently, in step S14, the three-dimensional object detection part 24 determines whether the number of edge lines is equal to or greater than a predetermined value. The predetermined value is a value obtained from experiments and the like and established in advance. For example, when a four-wheel vehicle is defined as the three-dimensional object to be detected, the predetermined value is established based on the number of edge lines of a four-wheel vehicle that appeared in the detection area A1 in experiments and the like performed in advance.
[086] When the number of edge lines is determined to be equal to or greater than the predetermined value (S14: YES), the three-dimensional object detection part 24 detects the existence of the three-dimensional object in the detection area A1 in step S15. Meanwhile, when the number of edge lines is determined not to be equal to or greater than the predetermined value (S14: NO), the three-dimensional object detection part 24 determines that no three-dimensional object exists in the detection area A1. Then, the processing shown in figures 7 and 8 is completed.
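The decision of steps S12 to S15 can be sketched as follows: for each detected edge line the luminance change along the line is evaluated (here with the absolute-difference form of Formula (5)), lines whose change exceeds a threshold are discarded as erroneous detections, and a three-dimensional object is reported when enough edge lines remain. The threshold values and the example luminance sequences are illustrative assumptions.

```python
# Sketch of steps S12 to S15 of figure 8.
def three_dimensional_object_present(edge_line_luminances, change_threshold, min_edge_lines):
    kept = []
    for luminances in edge_line_luminances:                      # step S12: luminance change per edge line
        change = sum(abs(a - b) for a, b in zip(luminances, luminances[1:]))   # Formula (5)
        if change <= change_threshold:                           # step S13: exclude noisy lines
            kept.append(luminances)
    return len(kept) >= min_edge_lines                           # steps S14 and S15

tyre_like = [[40, 42, 41, 43], [60, 61, 59, 62], [80, 82, 81, 83]]   # stretched vehicle parts
road_character = [[40, 210, 45, 205]]                                # white "50" on the road surface
print(three_dimensional_object_present(tyre_like + road_character,
                                        change_threshold=50, min_edge_lines=3))
```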
[087] As previously described, in the three-dimensional object detection apparatus 1 of the embodiment, the vertical imaginary lines that are line segments extending in the vertical direction in real space are established in the aerial view image in order to detect the three-dimensional object in the detection area A1 or A2. The three-dimensional object detection apparatus 1 can then calculate, for each of the multiple positions along the vertical imaginary lines, the luminance difference between two pixels close to the position and determine the presence or absence of the three-dimensional object based on the continuities of the luminance differences.
[088] Specifically, the three-dimensional object detection apparatus 1 establishes, for each of the detection areas A1, A2 in the aerial view image, the attention line La corresponding to a line segment extending in the vertical direction in real space and the reference line Lr that is different from the attention line La. The three-dimensional object detection apparatus 1 continuously obtains the luminance differences between the attention points Pa on the attention line La and the reference points Pr on the reference line Lr, along the attention line La and the reference line Lr. The luminance difference between the attention line La and the reference line Lr is obtained by continuously obtaining the luminance differences between the points in this manner. When the luminance difference between the attention line La and the reference line Lr is high, an edge of the three-dimensional object is likely to exist in the part where the attention line La is established. The three-dimensional object detection apparatus 1 can thus detect the three-dimensional object based on the continuous luminance differences. In particular, since the comparison of luminances between the vertical imaginary lines extending in the vertical direction in real space is performed, the detection processing of the three-dimensional object is not affected even when the three-dimensional object is stretched, depending on its height from the road surface, by the conversion to the aerial view image. Accordingly, in the three-dimensional object detection apparatus 1, the detection accuracy of the three-dimensional object can be improved.
[089] In addition, in the three-dimensional object detection apparatus 1, the luminance difference between two points at substantially the same height close to the vertical imaginary line is obtained. Specifically, the luminance difference is obtained from each attention point Pa on the attention line La and the corresponding reference point Pr on the reference line Lr that are at substantially the same height in real space. Accordingly, the three-dimensional object detection apparatus 1 can clearly detect the luminance difference in the case where an edge extending in the vertical direction exists.
[090] Furthermore, the three-dimensional object detection apparatus 1 assigns the attribute to each attention point Pa on the attention line La based on the luminance difference between the attention point Pa and the corresponding reference point Pr on the reference line Lr, and determines whether the attention line La is an edge line based on the continuities c of the attributes along the attention line La. Accordingly, the three-dimensional object detection apparatus 1 can detect a boundary between a region with high luminance and a region with low luminance as an edge line and perform edge detection close to natural human perception.
[091] This effect is described in detail. Figure 9 is a view showing an example image to explain the processing of the edge line detection part 23 shown in figure 3. In this example image, a first stripe pattern 101, having a stripe pattern in which regions with high luminance and regions with low luminance alternate, and a second stripe pattern 102, having a stripe pattern in which regions with low luminance and regions with high luminance alternate, are adjacent to each other. In addition, in this example image, the regions with high luminance in the first stripe pattern 101 and the regions with low luminance in the second stripe pattern 102 are adjacent to each other, while the regions with low luminance in the first stripe pattern 101 and the regions with high luminance in the second stripe pattern 102 are adjacent to each other. A part 103 located at the boundary between the first stripe pattern 101 and the second stripe pattern 102 tends not to be recognized as an edge by human senses.
[092] On the contrary, since the regions with low luminance and the regions with high luminance are adjacent to each other, the part 103 is recognized as an edge when edge detection is performed by using only the luminance difference.
However, the edge line detection part 23 determines the part 103 to be an edge line only when, in addition to luminance differences being detected in the part 103, there is continuity in the attributes of the luminance differences. Accordingly, the edge line detection part 23 can suppress the erroneous determination in which the part 103, which is not recognized as an edge by human senses, is recognized as an edge line, and can perform edge detection close to human perception.
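A small numeric illustration of the stripe-pattern example of figure 9 follows: across the boundary part 103 the luminance differences are large, but the attribute s of Formula (1) alternates between +1 and -1, so the continuities of Formula (2) stay at 0 and the part is not judged to be an edge line. All values are made up for illustration.

```python
# Why the boundary part 103 between the two stripe patterns is not an edge line.
def attribute_s(i_pa, i_pr, t=20):
    return 1 if i_pa > i_pr + t else (-1 if i_pa < i_pr - t else 0)

first_pattern = [200, 30, 200, 30, 200, 30]    # bright/dark stripes on one side of part 103
second_pattern = [30, 200, 30, 200, 30, 200]   # dark/bright stripes on the other side
s = [attribute_s(a, b) for a, b in zip(first_pattern, second_pattern)]
c = [1 if (s[i] == s[i + 1] and s[i] != 0) else 0 for i in range(len(s) - 1)]
print(s, sum(c) / len(s))                      # attributes alternate, normalised continuity is 0.0
```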
[093] In addition, when the luminance change of an edge line detected by the edge line detection part 23 is greater than the predetermined threshold, the three-dimensional object detection apparatus 1 determines that the edge line has been detected by means of erroneous determination. When the captured image obtained by the camera 10 is converted to the aerial view image, a three-dimensional object included in the captured image tends to appear in the aerial view image in a stretched state. For example, consider the case where the tire of the other vehicle V2 is stretched as described above. In this case, since a single part of the tire is stretched, the luminance change in the aerial view image in the stretched direction tends to be small. On the other hand, when characters drawn on the road surface and the like are erroneously determined to be an edge line, regions with high luminance such as the character parts and regions with low luminance such as the road surface parts are included together in the aerial view image, and the luminance change in the stretched direction tends to be large. Accordingly, the three-dimensional object detection apparatus 1 can recognize an edge line detected by means of erroneous determination by determining the luminance change in the aerial view image along the edge line. The detection accuracy of the three-dimensional object can thus be improved.
[094] Next, a three-dimensional object detection apparatus 1 of a second embodiment is described. It should be noted that parts similar to those of the first embodiment described above are denoted by the same reference numerals and their detailed description is thus omitted.
[095] The three-dimensional object detection apparatus 1 shown as the second embodiment differs from the first embodiment in the manner in which the three-dimensional object is detected by establishing a vertical imaginary line in an aerial view image. The three-dimensional object detection apparatus 1 calculates, with the luminance difference calculation part 22, a luminance difference between two pixels equidistant from the vertical imaginary line in real space.
[096] Specifically, as shown in Part (a) of figure 10, a vertical imaginary line L extending from the position Ps of the camera 10 in the vertical direction in real space is established. Although only one vertical imaginary line L is shown in figure 10, multiple vertical imaginary lines L are established radially in the detection area A1. An area B1 that is a part of the detection area A1 is shown in Part (b) of figure 10 in an enlarged manner. In figure 10, only the detection area A1 is described for convenience of description. However, similar processing is also performed for the detection area A2.
[097] As shown in Part (b) of figure 10, the luminance difference calculation part 22 establishes pairs of pixels respectively on both sides of the vertical imaginary line L in the horizontal direction in real space. Specifically, the first reference points Pa1 to Pa6 (hereinafter referred to simply as the first reference point Pai when referring to an arbitrary point) are established at positions apart from the vertical imaginary line L in the horizontal direction in real space, and the second reference points Pb1 to Pb6 (hereinafter referred to as the second reference point Pbi when referring to an arbitrary point) are established on the side of the vertical imaginary line L opposite to the first reference points. To be more specific, the first reference points Pa1 to Pa6 and the second reference points Pb1 to Pb6 are established on lines extending radially from the position Ps of the camera 10, as is the vertical imaginary line L.
[098] The luminance difference calculation part 22 performs the establishment in such a way that the distances between the first reference point Pai and the second reference point Pbi in real space are the same. Thus, in the aerial view image, the distance in the image increases in the order of a distance D1 between the pixel Pa1 and the pixel Pb1, a distance D2 between the pixel Pa2 and the pixel Pb2, a distance D3 between the pixel Pa3 and the pixel Pb3, a distance D4 between the pixel Pa4 and the pixel Pb4, a distance D5 between the pixel Pa5 and the pixel Pb5, and a distance D6 between the pixel Pa6 and the pixel Pb6.
[099] The luminance difference calculation part 22 thus establishes the pixel pairs Pai and Pbi which are at almost the same height in real space, at positions close to the vertical imaginary line L, and which are equidistant from the vertical imaginary line L in real space. The luminance difference calculation part 22 calculates the luminance difference between each pair of pixels Pai and Pbi. The luminance difference calculation part 22 thereby calculates, for each of the multiple positions along the vertical imaginary line L extending in the vertical direction in real space, the luminance difference between the pair of pixels close to the position.
[0100] The edge line detection part 23 detects an edge line by using the luminance differences calculated by the luminance difference calculation part 22. The three-dimensional object detection part 24 detects the three-dimensional object by using the edge line detected by the edge line detection part 23.
[0101] The following is a description of the operations of the three-dimensional object detection apparatus 1 described above, with reference to figures 11 and 12.
[0102] In figure 11, first, in step S41, the image data captured by the camera 10 are loaded into the calculator 20.
[0103] In the subsequent step S42, the viewpoint conversion part 21 performs viewpoint conversion processing on the captured image data loaded in step S41. The viewpoint conversion part 21 thereby creates aerial view image data.
[0104] In the subsequent step S43, the luminance difference calculation part 22 and the edge line detection part 23 detect an edge line (left vertical edge) by using the first reference points Pai (left reference points) established on the left side of the vertical imaginary line L. In the subsequent step S44, the luminance difference calculation part 22 and the edge line detection part 23 detect an edge line (right vertical edge) by using the second reference points Pbi (right reference points) established on the right side of the vertical imaginary line L. The processing in steps S43 and S44 is described later with reference to figure 12.
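Before the detailed flow of figure 12 is described, the reference-point layout of Part (b) of figure 10 can be sketched as follows. The pair (Pai, Pbi) is realised here as a fixed angular offset on either side of the vertical imaginary line L, which is one way of keeping all first reference points and all second reference points on their own radial lines through the camera position Ps while the pair separation in the image (D1, D2, ...) grows toward the outside of the detection area. The angular value and the coordinates are illustrative assumptions.

```python
# Sketch of the pair layout of the second embodiment.
import numpy as np

def reference_point_pairs(ps, angle_deg, radii, half_angle_deg=1.5):
    """Return (Pa_i, Pb_i, image distance D_i) for each radius along the line L."""
    pairs = []
    for r in radii:
        left = np.deg2rad(angle_deg - half_angle_deg)      # radial line of the first reference points Pa_i
        right = np.deg2rad(angle_deg + half_angle_deg)     # radial line of the second reference points Pb_i
        pa = (ps[0] + r * np.cos(left), ps[1] + r * np.sin(left))
        pb = (ps[0] + r * np.cos(right), ps[1] + r * np.sin(right))
        d = float(np.hypot(pa[0] - pb[0], pa[1] - pb[1]))  # D_i increases with r
        pairs.append((pa, pb, d))
    return pairs

for pa, pb, d in reference_point_pairs(ps=(320, 479), angle_deg=245, radii=(80, 160, 240)):
    print(round(d, 1))                                     # the image distances D1 < D2 < D3
```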
[0105] In the subsequent step S45, the three-dimensional object detection part 24 detects the three-dimensional object in the detection area A1 by using the left vertical edge detected in step S43 and the right vertical edge detected in step S44.
[0106] Next, the processing of detecting the left vertical edge and the right vertical edge is described with reference to figure 12. In the description of figure 12, the left vertical edge and the right vertical edge are collectively referred to as the "vertical edge".
[0107] First, in step S51, the luminance difference calculation part 22 establishes the vertical imaginary line L serving as a reference for establishing the first reference points Pai and the second reference points Pbi. The vertical imaginary line L is established so as to extend in a radial direction passing through the position Ps of the camera 10 and to extend in the vertical direction in real space. Each time the processing of step S51 is performed, the vertical imaginary line L is established so as to sweep across the inside of the detection area A1 at a predetermined interval.
[0108] In the subsequent step S52, an attribute s, a previous value s_pre of the attribute s, a counter d for the number of times of change and a score count n are initialized.
[0109] In the subsequent step S53, the luminance difference calculation part 22 establishes the first reference point Pai and the second reference point Pbi. At this time, as shown in Part (b) of figure 10, the luminance difference calculation part 22 establishes the reference points in such a way that they lie respectively on both sides of the vertical imaginary line L and are equidistant from the vertical imaginary line L at the same height in real space. The luminance difference calculation part 22 thus establishes the first reference point Pai and the second reference point Pbi in such a way that the distance between them becomes greater toward the outside of the detection area A1.
[0110] In the subsequent step S54, determination of the attribute s (luminance pattern) is performed for each of the positions on the vertical imaginary line L. At this time, the luminance difference calculation part 22 obtains the luminance difference between the first reference point Pai and the second reference point Pbi. The edge line detection part 23 establishes the attribute s (luminance pattern) according to the luminance difference obtained by the luminance difference calculation part 22 and Formula (1) shown above. This attribute s is the attribute s of the position where the line segment connecting the first reference point Pai and the second reference point Pbi crosses the vertical imaginary line L.
[0111] When the luminance of the first reference point Pai is greater than that of the second reference point Pbi by the threshold t or more, the attribute s is "1". Meanwhile, when the luminance value of the first reference point Pai is less than a luminance value obtained by subtracting the threshold t from the luminance value of the second reference point Pbi, the attribute s is "-1". When the relationship between the luminance value of the first reference point Pai and the luminance value of the second reference point Pbi is other than those described above, the attribute s is "0".
[0112] In the subsequent step S55, the edge line detection part 23 increments the score n only when the attribute s determined in step S54 is a predetermined value. The predetermined value of the attribute s can be "1" or "-1".
[0113] In the subsequent step S56, the edge line detection part 23 counts the number of times d that the attribute s has changed. At this time, the edge line detection part 23 compares the attribute s determined in the most recent step S54 with the attribute s_pre determined in the previous step S54. The attribute s_pre is the attribute s obtained from the first reference point Pai and the second reference point Pbi that are adjacent, along the vertical imaginary line L, to the first reference point Pai and the second reference point Pbi used to obtain the current attribute s. When the value of the attribute s and the value of the attribute s_pre are different, the number of times d of changes is incremented.
[0114] In the subsequent step S57, the edge line detection part 23 stores the attribute s.
[0115] In the subsequent step S58, the edge line detection part 23 determines whether the processing of steps S53 to S57 has been performed for all the reference points established for the vertical imaginary line L, that is, the reference line established in step S51. When the processing has not been performed for all the reference points, the processing returns to step S53. When step S53 is repeated, the next reference points are established. On the other hand, when the edge line detection part 23 determines that the processing has been performed for all the reference points, the processing proceeds to step S59. In step S59, the edge line detection part 23 determines whether the processing of steps S52 to S58 has been performed for all the vertical imaginary lines L established in the detection area A1. When the edge line detection part 23 determines that the processing has not been performed for all the vertical imaginary lines L, the processing returns to step S51. When step S51 is repeated, the next vertical imaginary line L is established. On the other hand, when the edge line detection part 23 determines that the processing has been performed for all the vertical imaginary lines L, the processing proceeds to step S60.
[0116] In step S60, the edge line detection part 23 determines the vertical edge appearing in the detection area A1. At this time, the edge line detection part 23 determines the vertical edge based on the score n, which indicates the number of times the same attribute s is determined among the total number N of pairs of the first reference point Pai and the second reference point Pbi, and on the number of times d of changes. Specifically, the edge line detection part 23 determines that the vertical imaginary line L is the vertical edge when both score n / total number N > θ and d < 5 are satisfied.
[0117] In score n / total number N, the score n is normalized by dividing it by the total number N of reference point pairs. When the proportion, relative to the total number N, of the number of times the relationship between the first reference point Pai and the second reference point Pbi is determined to be the same (brighter or darker) is large, it can be assumed that the vertical edge exists.
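The decision of step S60 (paragraphs [0116] and [0117]) can be sketched as follows; this Python fragment is an assumed illustration (the value of θ is hypothetical, and the attribute sequence would in practice come from steps S53 to S57), not the implementation of the apparatus.

    def is_vertical_edge(attributes, wanted=1, theta=0.6, d_limit=5):
        # attributes: attribute s of every pair along one vertical imaginary line L
        # wanted:     the predetermined attribute value counted as the score n
        # theta:      threshold for the normalized score n / N (assumed value)
        # d_limit:    upper limit of the number of times d of changes
        n_total = len(attributes)
        if n_total == 0:
            return False
        n = sum(1 for s in attributes if s == wanted)               # score n
        d = sum(1 for prev, cur in zip(attributes, attributes[1:])
                if cur != prev)                                     # changes d
        return (n / n_total) > theta and d < d_limit                # step S60

    print(is_vertical_edge([1, 1, 1, 0, 1, 1]))      # True: consistent pattern
    print(is_vertical_edge([1, -1, 1, -1, 1, -1]))   # False: frequent changes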
[0118] Furthermore, when the number of times d of changes is less than an upper limit value (five in this example), the edge line detection part 23 can assume that the vertical imaginary line L is the vertical edge. The upper limit value of the number of times of changes is established in consideration of the fact that an object whose attribute s changes frequently in the detection area A1 is likely to be roadside grass, a symbol or characters on the road surface, or the like. Accordingly, the upper limit value is set in advance, based on experiments and the like, in such a way that grass on the roadside, symbols and characters on the road surface, and the like are not determined to be the vertical edge.
[0119] On the other hand, when the conditions described above are not satisfied, the edge line detection part 23 determines that the vertical imaginary line L is not the vertical edge.
[0120] As described above, and as in the first embodiment, the three-dimensional object detection apparatus 1 establishes the vertical imaginary line extending in the vertical direction in real space and detects the three-dimensional object based on the continuities of the luminance differences. In this way, the detection accuracy of the three-dimensional object can be improved.
[0121] In addition, in the three-dimensional object detection apparatus 1, the two points of the first reference point Pai and the second reference point Pbi are established for one vertical imaginary line L, and the distance in the image between the first reference point Pai and the second reference point Pbi is set based on the distance in real space. In this way, the three-dimensional object detection apparatus 1 can detect the three-dimensional object by detecting the vertical edge, without setting the two lines of the attention line La and the reference line Lr as in the first embodiment. Consequently, the processing load of the three-dimensional object detection apparatus 1 can be reduced compared with that of the first embodiment.
[0122] The following is a description of another three-dimensional object detection apparatus 1 shown as the second embodiment. This three-dimensional object detection apparatus 1 shown as the second embodiment is the same as the three-dimensional object detection apparatus 1 described above in that only one vertical imaginary line L is established. As shown in Part (b) of figure 13, which shows a part of the aerial view image of Part (a) of figure 13 in an enlarged manner, this three-dimensional object detection apparatus 1 sets all the distances between the first reference points Pai and the second reference points Pbi that are established in the aerial view image along the vertical imaginary line L to the same distance D.
[0123] Specifically, as shown in figure 14, the luminance difference calculation part 22 establishes the vertical imaginary line L so as to extend radially from the position Ps of the camera 10. In addition, the luminance difference calculation part 22 establishes an imaginary line I1 on which the first reference points Pai are arranged and an imaginary line I2 on which the second reference points Pbi are arranged, in such a way that the imaginary lines I1 and I2 are parallel to the vertical imaginary line L. On the imaginary lines I1 and I2, the reference points Pa1 and Pb1 in the detection area A1 that are closest to the vertical imaginary line L are established at the positions where the imaginary lines I1 and I2 cross a radial imaginary line I extending from the position Ps of the camera 10.
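For the arrangement of paragraphs [0122] and [0123], a minimal geometric sketch can be given (Python; the coordinates, names and step values are assumptions, not part of the specification): the reference points are placed on two imaginary lines I1 and I2 parallel to the vertical imaginary line L, so that every pair has the same distance D in the aerial view image.

    def parallel_reference_pairs(p0, direction, big_d, step, count):
        # p0:        midpoint of the pair (Pa1, Pb1) closest to L in the detection area
        # direction: unit vector of the vertical imaginary line L in the image
        # big_d:     constant image distance D between Pa_i and Pb_i
        # step:      spacing between successive pairs along L
        # count:     number of pairs to place
        ux, uy = direction
        nx, ny = -uy, ux                      # normal to L, toward I1 and I2
        pairs = []
        for i in range(count):
            cx, cy = p0[0] + i * step * ux, p0[1] + i * step * uy
            pa = (cx - 0.5 * big_d * nx, cy - 0.5 * big_d * ny)   # on imaginary line I1
            pb = (cx + 0.5 * big_d * nx, cy + 0.5 * big_d * ny)   # on imaginary line I2
            pairs.append((pa, pb))
        return pairs

    pairs = parallel_reference_pairs(p0=(40.0, 0.0), direction=(1.0, 0.0),
                                     big_d=6.0, step=10.0, count=6)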
[0124] When the first reference point Pai and the second reference point Pbi are established as described above, the distance between the first reference point Pa1 and the second reference point Pb1 and the distance between the first reference point Pa2 and the second reference point Pb2 are both equal to d. On the other hand, the distance between the points Pa' and Pb', which lie on the radial imaginary line I extending from the position Ps of the camera and on the line connecting the first reference point Pa2 and the second reference point Pb2, is equal to d', which is greater than d. The threshold t for determining the attribute s is therefore set to become lower toward the outside of the detection area A1 (as the distance from the vehicle V1 becomes greater). Specifically, the threshold t for determining the attribute s of the first reference point Pa1 and the second reference point Pb1 is set to a value greater than the threshold t' for determining the attribute s of the first reference point Pa2 and the second reference point Pb2. More specifically, t' is defined as t' = t * (d / d'). The edge line detection part 23 then determines the attribute s of each position on the vertical imaginary line L by performing a calculation similar to that of Formula (1) shown above.
[0125] In other words, the edge line detection part 23 establishes the threshold t for each pair of the first reference point Pai and the second reference point Pbi. Then, when the luminance value of the first reference point Pai is greater than the luminance value obtained by adding the threshold t to the luminance value of the second reference point Pbi, the attribute s(xi, yi) is "1". On the other hand, when the luminance value of the first reference point Pai is less than the luminance value obtained by subtracting the threshold t from the luminance value of the second reference point Pbi, the attribute s(xi, yi) is "-1". When the relationship between the luminance value of the first reference point Pai and the luminance value of the second reference point Pbi is other than those described above, the attribute s(xi, yi) is "0".
[0126] The three-dimensional object detection apparatus 1 described above detects the vertical edge by performing the operations shown in figure 15. These operations differ from the operations shown in figure 12 in having step S53' in place of step S53.
[0127] In step S53', the luminance difference calculation part 22 establishes the first reference point Pai and the second reference point Pbi on the imaginary lines I1 and I2, respectively, which are provided parallel to the vertical imaginary line L. In addition, the luminance difference calculation part 22 establishes the threshold t for each pair of the established first reference point Pai and the established second reference point Pbi. Then, in step S54, the edge line detection part 23 compares the luminance difference between the first reference point Pai and the second reference point Pbi with the threshold t established for that pair of the first reference point Pai and the second reference point Pbi, and determines the attribute s.
[0128] As described above, and as in the first embodiment, the three-dimensional object detection apparatus 1 establishes the vertical imaginary line extending in the vertical direction in real space and detects the three-dimensional object based on the continuities of the luminance differences. In this way, the detection accuracy of the three-dimensional object can be improved.
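The distance-dependent threshold t' = t * (d / d') of paragraph [0124] can be sketched as below. This is an illustrative Python fragment under the assumption that, because the radial lines spread out in proportion to the distance from Ps, the ratio d / d' equals the ratio of the radial distances of the two pairs; the names and numerical values are not from the specification.

    def scaled_threshold(t, r_ref, r):
        # t:     threshold used for the pair (Pa1, Pb1) closest to the camera
        # r_ref: radial distance from Ps of that closest pair
        # r:     radial distance from Ps of the current pair
        # Under the stated assumption, d / d' = r_ref / r, so the threshold
        # becomes lower toward the outside of the detection area A1.
        return t * (r_ref / r)

    def attribute_scaled(lum_pa, lum_pb, t, r_ref, r):
        # Attribute s per paragraph [0125], using the per-pair threshold t'.
        t_prime = scaled_threshold(t, r_ref, r)
        if lum_pa > lum_pb + t_prime:
            return 1
        if lum_pa < lum_pb - t_prime:
            return -1
        return 0

    # Example: the same 6-level luminance difference is rejected near the camera
    # (t' = 10) but accepted farther away (t' = 5).
    print(attribute_scaled(106, 100, t=10, r_ref=40, r=40))   # 0
    print(attribute_scaled(106, 100, t=10, r_ref=40, r=80))   # 1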
[0129] In addition, the three-dimensional object detection apparatus 1 calculates the luminance difference between two pixels equidistant from the vertical imaginary line L in the aerial view image and establishes the threshold used to determine the three-dimensional object from the luminance difference in such a way that the threshold becomes lower as the position, among the multiple positions along the vertical imaginary line L, becomes higher in real space. In this way, even when the image of a higher part in real space is stretched by the viewpoint conversion of the captured image, the three-dimensional object detection apparatus 1 can adapt the threshold and detect the edge. Moreover, the three-dimensional object detection apparatus 1 can detect the three-dimensional object by detecting the vertical edge, without setting the two lines of the attention line La and the reference line Lr as in the first embodiment. Consequently, the processing load of the three-dimensional object detection apparatus 1 can be reduced compared with that of the first embodiment.
[0130] It should be noted that, even when the three-dimensional object detection apparatus 1 detects the luminance difference between two pixels on both sides of the vertical imaginary line while changing the position on the vertical imaginary line with the threshold kept fixed, the three-dimensional object detection apparatus 1 can still detect the edge line extending in the vertical direction and detect the three-dimensional object.
[0131] In the following, a three-dimensional object detection apparatus 1 of a third embodiment is described. It should be noted that parts similar to those of the embodiments described above are denoted by the same reference numerals and their detailed description is therefore omitted.
[0132] Figure 16 is a block diagram showing a functional configuration of a calculator 20 in the three-dimensional object detection apparatus 1 of the third embodiment. In figure 16, the camera 10 is also illustrated to clarify the connection relationship.
[0133] As shown in figure 16, the calculator 20 includes an edge intensity calculation part (edge intensity calculation device) 25 instead of the edge line detection part 23 of the first embodiment.
[0134] The edge intensity calculation part 25 calculates an edge intensity of an attention line La from the continuities of the luminance differences calculated by a luminance difference calculation part 22. The edge intensity is a numerical value indicating the likelihood of the attention line La being an edge line. Specifically, the edge intensity is calculated by Formula (7) shown below.
Formula 7:
edge intensity = Σ c(xi, yi) / N ... (7)
[0135] In Formula (7) shown above, c(xi, yi) is the continuity c of the attribute of an attention point Pai. N is the number of attention points Pa established on the attention line La. In Formula (7), the edge intensity is thus the value obtained by dividing the sum of the continuities c along the attention line La by the number N of the established attention points Pa.
[0136] Figure 17 is a schematic view showing the processing performed by the edge intensity calculation part 25 shown in figure 16. Part (a) of figure 17 shows the edge intensity in the case where another vehicle V2, as a three-dimensional object, exists in a detection area A1. Part (b) of figure 17 shows the edge intensity in the case where no three-dimensional object exists in the detection area A1. Although the edge intensity calculation part 25 is described here only in combination with an illustration of the detection area A1 in figure 17, similar processing can also be performed for a detection area A2.
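Formula (7) can be illustrated directly; the Python sketch below is an assumed example in which continuities is the list of continuity values c(xi, yi) obtained for the attention points Pa on one attention line La.

    def edge_intensity(continuities):
        # Edge intensity of one attention line La per Formula (7): the sum of
        # the continuities c divided by the number N of established attention
        # points Pa.
        n = len(continuities)
        return sum(continuities) / n if n else 0.0

    print(edge_intensity([1, 1, 1, 0, 1]))   # 0.8: likely an edge line
    print(edge_intensity([0, 1, 0, 0, 0]))   # 0.2: unlikely to be an edge line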
[0137] As shown in Part (a) of figure 17, when the other vehicle V2 exists in the detection area A1, the sum of the continuities c of the attributes of the attention points Pa is high, as described for Formula (7) shown above, and the edge intensity of each attention line La therefore tends to be high. Accordingly, the sum of the multiple edge intensities included in the detection area A1 is high.
[0138] On the other hand, as shown in Part (b) of figure 17, when no three-dimensional object exists in the detection area A1, the sum of the continuities c of the attributes of the attention points Pa is low, as described for Formula (7) shown above, and the edge intensity of each attention line La therefore tends to be low. Accordingly, the sum of the multiple edge intensities included in the detection area A1 is low.
[0139] As described above, when the sum of the edge intensities of the attention lines La is equal to or greater than a predetermined threshold, a three-dimensional object detection part 24 can determine that the three-dimensional object exists in the detection area A1. Incidentally, depending on the image capture environment of the three-dimensional object and the like, an edge extending in the vertical direction in real space sometimes appears only as a weak edge in the aerial view image. In this case, it may become impossible to detect the three-dimensional object. However, since the three-dimensional object detection apparatus 1 of the third embodiment detects the three-dimensional object based on the edge intensities, the three-dimensional object detection apparatus 1 can detect the three-dimensional object even when only weak edges appear in the aerial view image, by collecting a large number of the weak edges.
[0140] Figure 18 is a flowchart showing the details of a three-dimensional object detection method of the third embodiment. In figure 18, a description of the processing for the detection area A1 is given for convenience. However, similar processing can be performed for the detection area A2.
[0141] In the processing of steps S21 to S28, the three-dimensional object detection apparatus 1 first performs processing similar to that of steps S1 to S8 shown in figure 7.
[0142] In step S29 subsequent to step S28, the edge intensity calculation part 25 calculates the edge intensity according to Formula (7) shown above.
[0143] Then, in step S30, the calculator 20 determines whether the edge intensities have been calculated for all the attention lines La that can be established in the detection area A1. When the calculator 20 determines that the edge intensities have not been calculated for all the attention lines La (S30: NO), the processing returns to step S23. On the other hand, when the calculator 20 determines that the edge intensities have been calculated for all the attention lines La (S30: YES), the processing proceeds to step S31.
[0144] In step S31, the three-dimensional object detection part 24 calculates the sum of the edge intensities calculated by the edge intensity calculation part 25.
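As an illustrative sketch of the summation in step S31 and the threshold comparison described next in steps S32 and S33 (the threshold value below is an assumption, not a value given in the specification), the edge intensities of all attention lines La established in the detection area A1 are summed and compared with a predetermined threshold.

    def detect_three_dimensional_object(per_line_continuities, threshold=3.0):
        # per_line_continuities: one list of continuities c per attention line La
        # Returns True when the sum of the edge intensities (Formula (7)) over
        # all attention lines La is equal to or greater than the threshold.
        total = 0.0
        for continuities in per_line_continuities:
            n = len(continuities)
            total += sum(continuities) / n if n else 0.0     # steps S29 and S31
        return total >= threshold                            # steps S32 and S33

    # Example: five attention lines with mostly continuous attributes exceed an
    # assumed threshold of 3.0, so a three-dimensional object is detected.
    lines = [[1, 1, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0], [1, 0, 1, 1], [1, 1, 1, 1]]
    print(detect_three_dimensional_object(lines))   # True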
[0145] Then, in step S32, the three-dimensional object detection part 24 determines whether the sum of the edge intensities calculated in step S31 is equal to or greater than the threshold. When the sum of the edge intensities is determined to be equal to or greater than the threshold (S32: YES), the three-dimensional object detection part 24 detects, in step S33, that the three-dimensional object exists in the detection area A1. On the other hand, when the sum of the edge intensities is determined to be less than the threshold (S32: NO), the three-dimensional object detection part 24 determines that no three-dimensional object exists in the detection area A1. The processing shown in figure 18 is then completed.
[0146] As described above, in the three-dimensional object detection apparatus 1 and the three-dimensional object detection method of the third embodiment, as in the first embodiment, the vertical imaginary line extending in the vertical direction in real space is established and the three-dimensional object is detected based on the continuities of the luminance differences. In this way, the detection accuracy of the three-dimensional object can be improved.
[0147] In addition, in the three-dimensional object detection apparatus 1 of the third embodiment, the edge intensities of the attention lines La are calculated from the continuities of the luminance differences obtained by establishing the vertical imaginary line, and the three-dimensional object is detected based on the edge intensities. In this way, even when edges extending in the vertical direction appear only as weak edges in the image because of the image capture environment of the three-dimensional object and the like, the three-dimensional object detection apparatus 1 can suppress failures to detect the three-dimensional object. Specifically, even when the edges that appear in the aerial view image and extend in the vertical direction in real space are weak, the three-dimensional object detection apparatus 1 can detect the three-dimensional object based on the edge intensities by collecting a large number of the weak edges. The three-dimensional object detection apparatus 1 can therefore suppress situations in which the detection accuracy of the three-dimensional object deteriorates because of the image capture environment and the like.
[0148] The embodiments described above are merely examples of the present invention. Accordingly, the present invention is not limited to the embodiments described above. As a matter of course, embodiments other than those described above are possible, and various changes can be made depending on the design and the like within the scope of the technical concept of the present invention.
[0149] For example, in the embodiments described above, the calculator 20 includes the viewing point conversion part 21 to generate the aerial view image data. However, the present invention is not limited to this configuration. The aerial view image data does not necessarily have to be created, as long as processing similar to that of the embodiments described above is performed on the captured image data.
INDUSTRIAL APPLICABILITY
[0150] The present invention can be used in industrial fields in which a three-dimensional object in the surroundings is detected.
LIST OF REFERENCE SYMBOLS
1 three-dimensional object detection apparatus
10 camera
20 calculator
21 viewing point conversion part
22 luminance difference calculation part
23 edge line detection part
24 three-dimensional object detection part
25 edge intensity calculation part
Claims (4)
[0001] 1. Three-dimensional object detection device (1) to detect the three-dimensional object around a vehicle, the device CHARACTERIZED by the fact that it comprises: an image capture device (10) configured to capture an image of a predetermined area; a viewing point conversion unit (21) configured to perform viewing point conversion processing on the image captured by the image capture device (10) to create an aerial view image; a luminance difference calculator (22) configured to establish a line segment extending in a vertical direction in a real space as a first vertical imaginary line (La) in the aerial view image and a second vertical imaginary line (Lr) that is away from the first vertical imaginary line (La) by a predetermined distance in real space and extends in the vertical direction in real space, to establish a pixel (Pa,i) for each of a plurality of positions along the first vertical imaginary line (La) and a pixel (Pr,i) in such a way that the pixel (Pr,i) and the pixel (Pa,i) are located respectively at the same height in real space, and to calculate a luminance difference between the two pixels (Pa,i, Pr,i) established for each of the plurality of positions; an edge line detector (23) configured to add an attribute to each of the positions on the first vertical imaginary line (La) based on the luminance difference of the position on the first vertical imaginary line (La) and to detect an edge line based on continuities of the attributes; and a three-dimensional object detector (24) configured to detect the three-dimensional object based on an amount of the edge lines detected by the edge line detector (23).
[0002] 2. Three-dimensional object detection device (1), according to claim 1, CHARACTERIZED by the fact that, when a change in pixel luminance along any of the edge lines detected by the edge line detector (23) is greater than a predetermined value, the three-dimensional object detector (24) avoids using that edge line to detect the three-dimensional object.
[0003] 3. Three-dimensional object detection device (1), according to claim 1, CHARACTERIZED by the fact that it additionally comprises: an edge intensity calculator (25) configured to calculate an edge intensity of the first vertical imaginary line based on the luminance difference calculated by the luminance difference calculator (22), in which the three-dimensional object detector (24) detects the three-dimensional object based on a sum of the edge intensities calculated by the edge intensity calculator (25).
[0004] 4. Three-dimensional object detection method to detect the three-dimensional object around a vehicle, the method CHARACTERIZED by the fact that it comprises: capturing an image of a predetermined area; performing viewing point conversion processing on the captured image to create an aerial view image; establishing a line segment extending in a vertical direction in a real space as a first vertical imaginary line (La) in the aerial view image and a second vertical imaginary line (Lr) that is away from the first vertical imaginary line (La) by a predetermined distance in real space and extends in the vertical direction in real space, establishing a pixel (Pa,i) for each of a plurality of positions along the first vertical imaginary line (La) and a pixel (Pr,i) in such a way that the pixel (Pr,i) and the pixel (Pa,i) are located respectively at the same height in real space, and calculating a luminance difference between the two pixels (Pa,i, Pr,i) established for each of the plurality of positions; adding an attribute to each of the positions on the first vertical imaginary line (La) based on the luminance difference of the position on the first vertical imaginary line (La) and detecting an edge line based on the continuities of the attributes; and detecting the three-dimensional object based on an amount of the detected edge lines.